

Roundtables: Surviving the New Age of Conspiracies

MIT Technology Review

Watch a subscriber-only conversation unpacking our new series, "The New Conspiracy Age," and how this moment is changing science and technology. Everything is a conspiracy theory now. Watch a discussion with our editors and Mike Rothschild, journalist and conspiracy theory expert, about how we can make sense of them all.


Scaling innovation in manufacturing with AI

MIT Technology Review

AI integration modernizes factory operations and enables manufacturers to achieve greater business results. Manufacturing is getting a major system upgrade. As AI amplifies existing technologies--like digital twins, the cloud, edge computing, and the industrial internet of things (IIoT)--it is enabling factory operations teams to shift from reactive, isolated problem-solving to proactive, systemwide optimization. Digital twins--physically accurate virtual representations of a piece of equipment, a production line, a process, or even an entire factory--allow workers to test, optimize, and contextualize complex, real-world environments. Manufacturers are using digital twins to simulate factory environments with pinpoint detail. "AI-powered digital twins mark a major evolution in the future of manufacturing, enabling real-time visualization of the entire production line, not just individual machines," says Indranil Sircar, global chief technology officer for the manufacturing and mobility industry at Microsoft.


Quantum physicists have shrunk and "de-censored" DeepSeek R1

MIT Technology Review

A group of quantum physicists claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. The scientists at Multiverse Computing, a Spanish firm specializing in quantum-inspired AI techniques, created DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. Crucially, they also claim to have eliminated official Chinese censorship from the model. In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and "socialist values." As a result, companies build in layers of censorship when training the AI systems.


Networking for AI: Building the foundation for real-time intelligence

MIT Technology Review

AI inference-ready networks are essential infrastructure for turning AI's potential into performance. The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways. From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup's infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day. To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations.


Realizing value with AI inference at scale and in production

MIT Technology Review

Training an AI model to predict equipment failures is an engineering achievement. But it's not until prediction meets action--the moment that model successfully flags a malfunctioning machine--that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line. Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes the true value of AI lies in inference. "Inference is where AI earns its keep." It's the operational layer that puts all that training to use in real-world workflows.


Google's new Gemini 3 "vibe-codes" responses and comes with its own agent

MIT Technology Review

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. The previous model, Gemini 2.5, supports multimodal input. Users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. But Gemini 3 introduces what Google calls "generative interfaces," which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text. Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as "How many days are you traveling?" or "What kinds of activities do you enjoy?" It also presents clickable options based on what you might want next. When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective.


What is the chance your plane will be hit by space debris?

MIT Technology Review

In mid-October, a mysterious object cracked the windshield of a packed Boeing 737 cruising at 36,000 feet above Utah, forcing the pilots into an emergency landing. The internet was suddenly buzzing with the prospect that the plane had been hit by a piece of space debris. We still don't know exactly what hit the plane--likely a remnant of a weather balloon--but it turns out the speculation online wasn't that far-fetched. That's because while the risk of flights being hit by space junk is still small, it is, in fact, growing.


What's Next for AI?

MIT Technology Review

President of Microsoft Research Peter Lee elaborates on the future of AI, uncovering emerging trends, hidden opportunities, and breakthrough innovations that are not yet visible to most.


EmTech AI 2025: How AI is revolutionizing science

MIT Technology Review

President of Microsoft Research Peter Lee elaborates on the future of AI, uncovering emerging trends, hidden opportunities, and breakthrough innovations that are not yet visible to most.


OpenAI's new LLM exposes the secrets of how AI really works

MIT Technology Review

The experimental model won't compete with the biggest and best, but it could tell us why they behave in weird ways--and how trustworthy they really are. ChatGPT maker OpenAI has built an experimental large language model that is far easier to understand than typical models. That's a big deal, because today's LLMs are black boxes: Nobody fully understands how they do what they do. Building a model that is more transparent sheds light on how LLMs work in general, helping researchers figure out why models hallucinate, why they go off the rails, and just how far we should trust them with critical tasks. "As these AI systems get more powerful, they're going to get integrated more and more into very important domains," Leo Gao, a research scientist at OpenAI, told MIT Technology Review in an exclusive preview of the new work. "It's very important to make sure they're safe."